
    The LCG POOL Project, General Overview and Project Structure

    The POOL project has been created to implement a common persistency framework for the LHC Computing Grid (LCG) application area. POOL is tasked with storing experiment data and metadata at the multi-petabyte scale in a distributed and grid-enabled way. First production use of the new framework is expected for summer 2003. The project follows a hybrid approach, combining C++ object-streaming technology such as ROOT I/O for the bulk data with a transactionally safe relational database (RDBMS) store such as MySQL. POOL is based on a strict component approach, as laid down in the LCG persistency and blueprint RTAG documents, providing navigational access to distributed data without exposing details of the particular storage technology. This contribution describes the project breakdown into work packages and the high-level interaction between the main POOL components, and summarizes current status and plans.
    Comment: Talk from the 2003 Computing in High Energy and Nuclear Physics conference (CHEP03), La Jolla, CA, USA, March 2003, 5 pages. PSN MOKT00
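    The hybrid approach described above pairs streamed bulk data with a transactionally safe catalogue used for navigation. The following minimal Python sketch illustrates that idea only; the HybridStore class, pickle-based streaming and SQLite catalogue are hypothetical stand-ins, not POOL's actual ROOT I/O and MySQL components.

        # Toy hybrid store in the spirit of POOL: bulk objects are streamed
        # to a flat file, while a relational catalogue records where each
        # object lives so it can be located again ("navigational access").
        # All names are illustrative; POOL itself is a C++ framework.
        import pickle
        import sqlite3
        import uuid

        class HybridStore:
            def __init__(self, blob_path, catalogue_path):
                self.blob = open(blob_path, "a+b")         # bulk object data
                self.db = sqlite3.connect(catalogue_path)  # transactional metadata
                self.db.execute(
                    "CREATE TABLE IF NOT EXISTS objects "
                    "(guid TEXT PRIMARY KEY, offset INTEGER, length INTEGER)")

            def write(self, obj):
                data = pickle.dumps(obj)
                self.blob.seek(0, 2)                       # append at end of file
                offset = self.blob.tell()
                self.blob.write(data)
                self.blob.flush()
                guid = str(uuid.uuid4())
                with self.db:                              # commits atomically
                    self.db.execute("INSERT INTO objects VALUES (?, ?, ?)",
                                    (guid, offset, len(data)))
                return guid                                # opaque token for later lookup

            def read(self, guid):
                row = self.db.execute(
                    "SELECT offset, length FROM objects WHERE guid = ?",
                    (guid,)).fetchone()
                if row is None:
                    raise KeyError(guid)
                offset, length = row
                self.blob.seek(offset)
                return pickle.loads(self.blob.read(length))

    A caller holds only the returned GUID, never a file offset, which is what lets the storage technology behind the catalogue be swapped without exposing its details.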

    Caching for dataset-based workloads with heterogeneous file sizes

    Caching can effectively reduce the cost of serving content and improve the user experience. In this paper, we explore the benefits of caching for existing scientific workloads, taking the Worldwide LHC (Large Hadron Collider) Computing Grid as an example. It is a globally distributed system that stores and processes several hundred petabytes of data and serves the needs of thousands of scientists around the globe. Scientific computation differs from other applications such as video streaming in that file sizes vary from a few bytes to terabytes and logical links between the files affect user access patterns. These factors profoundly influence cache performance and should therefore be carefully analyzed when selecting which caching policy to deploy or designing new ones. In this work, we study how the hierarchical organization of the LHC physics data into files and groups of files called datasets affects the request patterns. We then propose new caching policies that exploit dataset-specific knowledge and compare them with file-based ones. Moreover, we show that limited connectivity between the computing and storage sites leads to the delayed-hits phenomenon, and we estimate the consequent reduction in the potential benefits of caching.
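    To make the dataset-level idea concrete, the sketch below keys cache recency on datasets rather than individual files, so that a whole dataset is evicted at once. It is a minimal toy assuming a request stream of (dataset, filename, size) tuples; the policies actually proposed in the paper are richer than this.

        # Dataset-aware LRU: touching any file of a dataset refreshes the
        # whole dataset, and eviction removes least-recently-used datasets
        # wholesale, exploiting the fact that files of one dataset tend to
        # be requested together.
        from collections import OrderedDict

        class DatasetLRUCache:
            def __init__(self, capacity_bytes):
                self.capacity = capacity_bytes
                self.used = 0
                self.datasets = OrderedDict()  # dataset -> {filename: size}

            def access(self, dataset, filename, size):
                """Record one request; return True on a cache hit."""
                files = self.datasets.get(dataset)
                hit = files is not None and filename in files
                if files is None:
                    files = self.datasets[dataset] = {}
                self.datasets.move_to_end(dataset)  # whole dataset becomes recent
                if not hit:
                    files[filename] = size
                    self.used += size
                    self._evict()
                return hit

            def _evict(self):
                # Never evict the dataset that was just touched.
                while self.used > self.capacity and len(self.datasets) > 1:
                    _, files = self.datasets.popitem(last=False)  # oldest dataset
                    self.used -= sum(files.values())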

    I/O performance studies of analysis workloads on production and dedicated resources at CERN

    The recent evolution of the analysis frameworks and physics data formats of the LHC experiments provides the opportunity to use central analysis facilities, with a strong focus on interactivity and short turnaround times, to complement the more common distributed analysis on the Grid. In order to plan for such facilities, it is essential to know in detail the performance of the combination of a given analysis framework, a specific analysis, and the installed computing and storage resources. This contribution describes performance studies performed at CERN, using the EOS disk-based storage, either directly or through an XCache instance, from both batch resources and high-performance compute nodes which could be used to build an analysis facility. A variety of benchmarks, both synthetic and based on real-world physics analyses and their corresponding input datasets, are utilized. In particular, the RNTuple format from the ROOT project is put to the test and compared to the latest version of the TTree format, and the impact of caches is assessed. In addition, we assessed the difference in performance between access through storage-system-specific protocols, such as XRootD, and access through FUSE. The results of this study are intended to be a valuable input to the design of analysis facilities, at CERN and elsewhere.
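    As an illustration of the synthetic side of such benchmarks, the sketch below measures sequential read throughput through a POSIX path such as a FUSE mount. The path is a hypothetical example; the protocol-based counterpart would instead open the same file via a root:// URL through the XRootD client or ROOT.

        # Sequential read-throughput microbenchmark over a POSIX path.
        # Note: for meaningful numbers the OS page cache should be cold,
        # e.g. by reading a file not touched since the last cache flush.
        import time

        def read_throughput(path, block_size=4 * 1024 * 1024):
            """Read a file sequentially and return the observed rate in MB/s."""
            total = 0
            start = time.perf_counter()
            with open(path, "rb") as f:
                while True:
                    block = f.read(block_size)
                    if not block:
                        break
                    total += len(block)
            elapsed = time.perf_counter() - start
            return total / elapsed / 1e6

        # Hypothetical EOS FUSE mount point; substitute the file under test.
        rate = read_throughput("/eos/user/e/example/sample.root")
        print(f"sequential read: {rate:.1f} MB/s")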

    Jets and energy flow in photon-proton collisions at HERA

    Properties of the hadronic final state in photoproduction events with large transverse energy are studied at the electron-proton collider HERA. Distributions of the transverse energy, jets and underlying event energy are compared to p̄p data and QCD calculations. The comparisons show that the γp events can be consistently described by QCD models including, in addition to the primary hard scattering process, interactions between the two beam remnants. The differential jet cross sections dσ/dE_T^jet and dσ/dη^jet are measured.

    A Roadmap for HEP Software and Computing R&D for the 2020s

    Particle physics has an ambitious and broad experimental programme for the coming decades. This programme requires large investments in detector hardware, either to build new facilities and experiments or to upgrade existing ones. Similarly, it requires commensurate investment in the R&D of software to acquire, manage, process, and analyse the sheer amounts of data to be recorded. In planning for the HL-LHC in particular, it is critical that all of the collaborating stakeholders agree on the software goals and priorities, and that the efforts complement each other. In this spirit, this white paper describes the R&D activities required to prepare for this software upgrade.
    Peer reviewed.

    Development of an Interactive Simulation System for Low-Temperature Gas Separation Technology

    We present a study of J/ψ meson production in collisions of 26.7 GeV electrons with 820 GeV protons, performed with the H1 detector at the HERA collider at DESY. The J/ψ mesons are detected via their leptonic decays both to electrons and muons. Requiring exactly two particles in the detector, a cross section of σ(ep → J/ψ X) = (8.8±2.0±2.2) nb is determined for 30 GeV ≤ W_γp ≤ 180 GeV and Q² ≲ 4 GeV². Using the flux of quasi-real photons with Q² ≲ 4 GeV², a total production cross section of σ(γp → J/ψ X) = (56±13±14) nb is derived at an average W_γp = 90 GeV. The distribution of the squared momentum transfer t from the proton to the J/ψ can be fitted using an exponential exp(−b|t|) below a |t| of 0.75 GeV², yielding a slope parameter of b = (4.7±1.9) GeV⁻².
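    To illustrate how a slope parameter such as b is extracted, the sketch below fits dN/d|t| ∝ exp(−b|t|) to a binned |t| spectrum with scipy, restricted to |t| < 0.75 GeV² as in the abstract. The bin contents are placeholder numbers for illustration, not H1 data.

        # Exponential fit of a |t| spectrum to extract the slope parameter b.
        import numpy as np
        from scipy.optimize import curve_fit

        def dndt(t, norm, b):
            return norm * np.exp(-b * t)

        # Placeholder binned spectrum: |t| bin centres in GeV^2 and event counts.
        t_centres = np.array([0.05, 0.15, 0.25, 0.35, 0.45, 0.55, 0.65])
        counts = np.array([410.0, 260.0, 170.0, 105.0, 70.0, 44.0, 29.0])

        mask = t_centres < 0.75  # fit region quoted in the abstract
        popt, pcov = curve_fit(dndt, t_centres[mask], counts[mask],
                               p0=(counts[0], 4.0),
                               sigma=np.sqrt(counts[mask]), absolute_sigma=True)
        b, b_err = popt[1], np.sqrt(pcov[1, 1])
        print(f"slope parameter b = {b:.2f} +/- {b_err:.2f} GeV^-2")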

    An Open Data Policy for CERN… and an important step in preserving the digital legacy of the Large Hadron Collider

    This blog post, written on the occasion of the Worldwide Data Preservation Day, discusses the reality of an Open Data portal serving over 2 petabytes of data that has already proven its worth, and the extension of this effort into a formal policy.

    Computing in High Energy Physics
